5. Cross Product

a. Determinants

3. Properties of Determinants

Here we list the properties of determinants. These properties hold for determinants of any size, but we will prove them only for \(3\times3\) determinants. An asterisk * appears on the most important properties; the others are not needed for this course. Some of these properties (especially those called row operations) can greatly simplify the computation of a determinant.

First a definition:

The transpose of a matrix is the matrix with the rows and columns interchanged. Alternatively, it is the matrix flipped on the principal diagonal. For a \(3\times3\) matrix: \[ \begin{pmatrix} a & d & g \\ b & e & h \\ c & f & i \end{pmatrix}^{\large\intercal} =\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \]
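For example, with a numerical matrix: \[ \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}^{\large\intercal} =\begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{pmatrix} \]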

  1. The determinant of the transpose is the same as the determinant of the matrix: \[ \det M^\intercal=\det M \]  

\[ \begin{vmatrix} a & d & g \\ b & e & h \\ c & f & i \end{vmatrix} =aei+dhc+gbf-gec-ahf-dbi \] \[ \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} =aei+bfg+cdh-ceg-afh-bdi \]

These are equal.
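As a numerical check, take \(M=\begin{pmatrix} 1 & 2 & 0 \\ 0 & 3 & 1 \\ 2 & 0 & 1 \end{pmatrix}\). Expanding each determinant on its first row: \[ \det M= \begin{vmatrix} 1 & 2 & 0 \\ 0 & 3 & 1 \\ 2 & 0 & 1 \end{vmatrix} =3-2(-2)+0=7 \qquad \text{and} \qquad \det M^\intercal= \begin{vmatrix} 1 & 0 & 2 \\ 2 & 3 & 0 \\ 0 & 1 & 1 \end{vmatrix} =3-0+2(2)=7 \]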

This theorem implies that any fact we state about the rows of a determinant also holds for the columns of a determinant. So in the remaining theorems, we will only mention the rows, but the theorems also hold with rows replaced by columns.

And a couple more definitions:

A matrix is diagonal if its only non-zero entries are on the principal diagonal. A matrix is upper (resp. lower) triangular if all entries below (resp. above) the principal diagonal are zero.

In the following three matrices, the first is upper triangular, the second is lower triangular and the third is diagonal (as well as both upper and lower triangular). \[ \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix} \qquad \qquad \begin{pmatrix} 1 & 0 & 0 \\ 2 & 4 & 0 \\ 3 & 5 & 6 \end{pmatrix} \qquad \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 6 \end{pmatrix} \]

  2. The determinant of any triangular matrix (upper, lower or diagonal) is the product of the principal diagonal entries:

For a lower triangular matrix, successively expand on the \(1^\text{st}\) row, then the \(2^\text{nd}\) row, then the \(3^\text{rd}\) row, etc. You will get the product of the diagonal entries, and the off-diagonal entries never contribute. An upper triangular matrix is the transpose of a lower triangular matrix, so its determinant is the same product by Theorem 1. A diagonal matrix is a special case of both.
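For example, for the three matrices above: \[ \begin{vmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 2 & 4 & 0 \\ 3 & 5 & 6 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 6 \end{vmatrix} =1\cdot4\cdot6=24 \]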

  3. If we interchange two rows of a matrix, its determinant changes by a minus sign.
      If we let \(E_{ij}\) denote the operation of interchanging the \(i^\text{th}\) row with the \(j^\text{th}\) row, then: \[ \det E_{ij}(M)=-\det M \]  

\(\det M= \begin{vmatrix} a & d & g \\ b & e & h \\ c & f & i \end{vmatrix} =aei+dhc+gbf-gec-ahf-dbi\)

\(\det E_{12}(M)= \begin{vmatrix} b & e & h \\ a & d & g \\ c & f & i \end{vmatrix} =bdi+egc+haf-hdc-bgf-eai=-\det M\)

\(\det E_{13}(M)= \begin{vmatrix} c & f & i \\ b & e & h \\ a & d & g \end{vmatrix} =ceg+fha+ibd-iea-chd-fbg=-\det M\)

\(\det E_{23}(M)= \begin{vmatrix} a & d & g \\ c & f & i \\ b & e & h \end{vmatrix} =afh+dib+gce-gfb-aie-dch=-\det M\)
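A quick numerical instance (here for a \(2\times2\) determinant): \[ \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} =4-6=-2 \qquad \text{while} \qquad \begin{vmatrix} 3 & 4 \\ 1 & 2 \end{vmatrix} =6-4=2 \]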

  4. If two rows of a matrix \(M\) are identical, then: \[ \det M=0 \]  

Suppose rows \(i\) and \(j\) are identical. Then interchanging these rows does nothing: \(E_{ij}(M)=M\). So on the one hand, \(\det E_{ij}(M)=\det M\).

But by Theorem 3, \(\det E_{ij}(M)=-\det M\). So \(\det M=-\det M\), which forces \(\det M=0\).

\(\begin{vmatrix} 3 & 5 & -1 \\ -2 & 1 & 4 \\ -2 & 1 & 4 \end{vmatrix} =0\)  To check, expand on the first row.

  5. If we multiply a row of a matrix by a constant \(r\), the determinant changes by a factor of \(r\).
      If we let \(E_i[r]\) denote the operation of multiplying the \(i^\text{th}\) row by \(r\), then \[ \det E_i[r](M)=r\det M \]  

\(\det M= \begin{vmatrix} a & d & g \\ b & e & h \\ c & f & i \end{vmatrix} =aei+dhc+gbf-gec-ahf-dbi\)

We only check it for row \(2\): \[\begin{aligned} \det E_2[r](M) &= \begin{vmatrix} a & d & g \\ rb & re & rh \\ c & f & i \end{vmatrix} \\ &=arei+drhc+grbf-grec-arhf-drbi \\ &=r(aei+dhc+gbf-gec-ahf-dbi) \\ &=r\det M \end{aligned}\]

This theorem is used to reduce the size of numbers in a determinant when there is a common factor in a row or column:

\[\begin{aligned} \begin{vmatrix} 2 & 8 & 2 \\ 9 & 6 & 15 \\ -2 & -2 & -3 \end{vmatrix} &=(2)(3)(-1) \begin{vmatrix} 1 & 4 & 1 \\ 3 & 2 & 5 \\ 2 & 2 & 3 \end{vmatrix} \\[5pt] &=(2)(3)(-1)(2) \begin{vmatrix} 1 & 2 & 1 \\ 3 & 1 & 5 \\ 2 & 1 & 3 \end{vmatrix} \\[5pt] &=-12\left( \begin{vmatrix} 1 & 5 \\ 1 & 3 \end{vmatrix} -2\begin{vmatrix} 3 & 5 \\ 2 & 3 \end{vmatrix} +\begin{vmatrix} 3 & 1 \\ 2 & 1 \end{vmatrix} \right) \\[5pt] &=-12[(-2)-2(-1)+(1)] =-12(1) \\ &=-12 \end{aligned}\]



In the steps above: first factor \(2\) from the first row, \(3\) from the second row and \(-1\) from the third row; then factor \(2\) from the second column; finally, expand on the first row.

Try computing this determinant without first factoring.

  6. If three matrices \(A\), \(B\) and \(C\) are identical except for one row, and that row of \(C\) is the sum of the corresponding rows of \(A\) and \(B\), then \[ \det C=\det A+\det B \]   In more detail for the second row: \[ \begin{vmatrix} a & d & g \\ p+s & q+t & r+u \\ c & f & i \end{vmatrix} = \begin{vmatrix} a & d & g \\ p & q & r \\ c & f & i \end{vmatrix} + \begin{vmatrix} a & d & g \\ s & t & u \\ c & f & i \end{vmatrix} \]  

To prove this for the second row, expand on the second row: \[ \begin{vmatrix} a & d & g \\ p+s & q+t & r+u \\ c & f & i \end{vmatrix} =-(p+s)\begin{vmatrix} d & g \\ f & i \end{vmatrix} +(q+t)\begin{vmatrix} a & g \\ c & i \end{vmatrix} -(r+u)\begin{vmatrix} a & d \\ c & f \end{vmatrix} \] \[ =\left( -p\begin{vmatrix} d & g \\ f & i \end{vmatrix} +q\begin{vmatrix} a & g \\ c & i \end{vmatrix} -r\begin{vmatrix} a & d \\ c & f \end{vmatrix} \right) +\left( -s\begin{vmatrix} d & g \\ f & i \end{vmatrix} +t\begin{vmatrix} a & g \\ c & i \end{vmatrix} -u\begin{vmatrix} a & d \\ c & f \end{vmatrix} \right) \] \[ =\begin{vmatrix} a & d & g \\ p & q & r \\ c & f & i \end{vmatrix} + \begin{vmatrix} a & d & g \\ s & t & u \\ c & f & i \end{vmatrix} \]
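A small numerical instance (for a \(2\times2\) determinant whose second rows add): \[ \begin{vmatrix} 1 & 2 \\ 3+4 & 5+6 \end{vmatrix} = \begin{vmatrix} 1 & 2 \\ 7 & 11 \end{vmatrix} =-3 \qquad \text{and} \qquad \begin{vmatrix} 1 & 2 \\ 3 & 5 \end{vmatrix} + \begin{vmatrix} 1 & 2 \\ 4 & 6 \end{vmatrix} =(-1)+(-2)=-3 \]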

  7. If we add a multiple of one row of a matrix to another row, the determinant does not change.
      If we let \(E_{ij}[r]\) denote the operation of multiplying the \(i^\text{th}\) row by \(r\) and adding that to the \(j^\text{th}\) row, then \[ \det E_{ij}[r](M)=\det M \]  

We apply Theorem 6 with \(A=M\), \(B=\) the matrix \(M\) with the \(j^\text{th}\) row replaced by \(r\) times the \(i^\text{th}\) row, and \(C=E_{ij}[r](M)\). Then \[\begin{aligned} \det C &=\det A+\det B \\ \det E_{ij}[ r](M)&=\det M+\det B \end{aligned}\] To compute \(\det B\) we use Theorem 5 to factor an \(r\) out of the \(j^\text{th}\) row leaving a matrix \(N\) which has the \(j^\text{th}\) row equal to the \(i^\text{th}\) row. By Theorem 4, \(\det N=0\). Putting all this together, we have: \[ \det E_{ij}[ r](M)=\det M+\det B=\det M+r\det N=\det M \]
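For example, adding \(2\) times the first row to the second row of a \(2\times2\) matrix: \[ \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} =-2 \qquad \text{and} \qquad \begin{vmatrix} 1 & 2 \\ 3+2\cdot1 & 4+2\cdot2 \end{vmatrix} = \begin{vmatrix} 1 & 2 \\ 5 & 8 \end{vmatrix} =8-10=-2 \]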

We emphasize again that everything we have said about rows also applies to columns and also applies to matrices of any size.

Row Operations

Properties 3, 5 and 7 are often called the Row Operations (of type I, II and III, respectively) and are frequently used to simplify the computation of a determinant. Here is an example and some exercises:

Compute \(\begin{vmatrix} 4 & 1 & 2 & 3 \\ 5 & 1 & 5 & 6 \\ 6 & 1 & 2 & 7 \\ 5 & 1 & 4 & 4 \end{vmatrix}\).

First use Row Operation III (Theorem 7) to subtract the \(1^\text{st}\) row from the other three rows. Then expand on the \(2^\text{nd}\) column: \[ \begin{vmatrix} 4 & 1 & 2 & 3 \\ 5 & 1 & 5 & 6 \\ 6 & 1 & 2 & 7 \\ 5 & 1 & 4 & 4 \end{vmatrix} =\begin{vmatrix} 4 & 1 & 2 & 3 \\ 1 & 0 & 3 & 3 \\ 2 & 0 & 0 & 4 \\ 1 & 0 & 2 & 1 \end{vmatrix} =(-1)\begin{vmatrix} 1 & 3 & 3 \\ 2 & 0 & 4 \\ 1 & 2 & 1 \end{vmatrix} \] Now use Row Operation III to subtract \(2\) times the \(1^\text{st}\) column from the \(3^\text{rd}\) column and expand on the \(2^\text{nd}\) row: \[\begin{aligned} \begin{vmatrix} 4 & 1 & 2 & 3 \\ 5 & 1 & 5 & 6 \\ 6 & 1 & 2 & 7 \\ 5 & 1 & 4 & 4 \end{vmatrix} &=(-1)\begin{vmatrix} 1 & 3 & 1 \\ 2 & 0 & 0 \\ 1 & 2 & -1 \end{vmatrix} =(-1)(-2)\begin{vmatrix} 3 & 1 \\ 2 & -1 \end{vmatrix} \\ &=(-1)(-2)(-3-2)=-10 \end{aligned}\] Notice that this is much quicker than expanding on a row and computing four \(3\times3\) determinants.

In an exercise on the previous page, we computed \(\begin{vmatrix} -3 & 1 & 1 & -6 \\ -1 & 3 & 2 & 2 \\ 3 & -2 & 1 & 3 \\ 1 & -2 & -4 & 1 \end{vmatrix}\). Recompute it by first adding the first row to the third row and the second row to the fourth row.

The determinant is \(0\).

We first add the first row to the third row and the second row to the fourth row: \[ \begin{vmatrix} -3 & 1 & 1 & -6 \\ -1 & 3 & 2 & 2 \\ 3 & -2 & 1 & 3 \\ 1 & -2 & -4 & 1 \end{vmatrix} =\begin{vmatrix} -3 & 1 & 1 & -6 \\ -1 & 3 & 2 & 2 \\ 0 & -1 & 2 & -3 \\ 0 & 1 & -2 & 3 \end{vmatrix} \] We now add the third row to the fourth row: \[ \begin{vmatrix} -3 & 1 & 1 & -6 \\ -1 & 3 & 2 & 2 \\ 3 & -2 & 1 & 3 \\ 1 & -2 & -4 & 1 \end{vmatrix} =\begin{vmatrix} -3 & 1 & 1 & -6 \\ -1 & 3 & 2 & 2 \\ 0 & -1 & 2 & -3 \\ 0 & 0 & 0 & 0 \end{vmatrix} =0 \] The last determinant is \(0\) because it has a row of all \(0\)'s.

Compute \(\begin{vmatrix} 1 & 3 & 5 & 1 \\ 0 & 4 & 2 & 8 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 2 \end{vmatrix}\)

Use one of the theorems above.

The determinant is \(24\).

By Theorem 2, the determinant of a triangular matrix is the product of its diagonal entries: \[ \begin{vmatrix} 1 & 3 & 5 & 1 \\ 0 & 4 & 2 & 8 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 2 \end{vmatrix} =1\cdot4\cdot3\cdot2=24 \]
